perm filename NEW.MSS[RDG,DBL] blob sn#704363 filedate 1983-02-04 generic text, type C, neo UTF8
@Section[Introduction]
@Comment{From IJCAI}

@Comment[Motivation]
The central process in any learning experience
is the incorporation of a @i[new] fact into an existing theory.
Often the goal of that process is more specific,
to learn some new fact about some object.
But what does it mean to claim that a sentence is @i{new};
and, in particular, what qualifies as a novel fact about some object?
Despite the vast interest in learning and the abundance of related papers,
(cf. @Cite[Critic], @Cite[Diet/Mikow])
no one has rigorously defined what it means to be "new",
either in general or with respect to a single object.

This paper attempts to fill that gap.
Our goal is to obtain
a @i[semantic] rather than @i[syntactic] understanding of novelty.
This preference stems from our belief that a semantic account 
(one based on the possible interpretations of the theory)
provides more insight into the phenomenon of novelty.
It also means we may be able
to generalize these results to other logics and languages
beyond first-order predicate calculus.

Our goal is a general definition of "novelty":
a formal specification which indicates whether
a sentence relays new information about some object.
Our approach,
like that of many other papers whose task is to formalize
some commonly used term
(cf. @Cite[NaivePhysics], @Cite[WhatLink], @Cite[WhatConcept],
@Cite[Turing]),
is to employ a set of simple, uncontroversial examples (and non-examples)
which illustrate the desired characteristics of the topic
(here, novelty).
The cases often illustrate the inadequacy of
"@i{a priori} reasonable" first guesses,
and lead to the needed major refinement.
Fortunately,
in this case
the result
of the final iteration
is an elegant and simple formulation
which not only "feels" correct,
but can be shown to be both necessary and sufficient
(given a small set of initial assumptions).

@Comment[Reading Guide]
This paper deals with two kinds of novelty.
Section @Ref[TheoryAlone] discusses the first kind of novelty,
newness of a sentence with respect to a theory.
Section @Ref[PropCase] then addresses the 
more difficult task of determining when a sentence conveys something
new about a particular object.
While the first kind of novelty is fairly easy to capture,
the second requires a consideration of the ways in which new information can
be propagated within a theory.@Foot{
This is demonstrated by a series of 
simple, uncontroversial examples (and non-examples),
each demanding an increasingly more comprehensive definition.
This approach is in the style of other papers whose goal
is the formalization of some commonly used term,
c.f. @Cite[NaivePhysics], @Cite[WhatLink], @Cite[WhatConcept], @Cite[Turing].}
@Comment{ what of Church's thesis?, or Plato's "The Meno??...}

Section @Ref[PropCase] begins with the "propositional logic case" --
a system sufficiently impoverished
(thanks to its closed universe and lack of relations)
to permit only the "definitional" form of novelty,
while, at the same time,
rich enough to raise all of the definitional issues.
Chapter @Ref[PredCal] will then extend this newness criterion to handle
full first-order predicate calculus, with equality.
It is possible to formulate many of these semantic
(read "model based")
ideas in terms of "proof-procedure based" methods;
Chapter @Ref[SemInSyn] lists some relevant connections.
The concluding Section @Ref[Conclusion]
describes how our definitions can be applied
and lists many outstanding research issues.
We are interested in defining (the referent of) a symbol more precisely.
Later we will deal with showing new facts about an object --
that is, once an object has been defined, one can still specify
new facts about it.
"New with respect to an object" means that the object has already been
specified, and that we now want to add something new about it.
@Section[New with respect to a Theory]
@Label[TheoryAlone]

This section will address the issue of what it means for a
sentence @g[s] to be new with respect to a theory@Foot{
In all the cases below, we will insist that a theory is a consistent,
deductively closed
(although usually not complete) set of assertions.}
@T<Th>;
this is the relation @T<N(Th,@g[s])>.
Intuitively,
we want @T<@g[s]> to be new if it (somehow) further specifies
something about the world.
Alternatively, we can think of a new sentence as providing
some additional constraints @K[Dash]
which remove some possible worlds from consideration.

The next two sections provide two definitions of newness;
one based on proof procedures, the other, on model theory.
The concluding section proves that 
(in the standard case,)
these two definitions coincide.

@Section[Proof Based Criterion]
We begin by partitioning the possible @g[s]s,
based on their relation to @T<Th>.
@BEGIN[Enumerate]

@g[s] can already be in @T[Th] --
@Tag[alreadyin]
@T<@g[s] @K[MemberOf] Th>.

@Tag[deduce]
@g[s] may be deducible from @T[Th] 
-- @T<@g[s] @K[MemberOf] DC(Th)>,@*
where @T[DC] takes a theory onto its deductive closure.@Foot{
To make this split a true partitioning, we should eliminate the deriving axioms
from the closure...}

@Tag[incons]
@g[s] can be inconsistent with @T[Th] --
@T<@K[Not]@g[s] @K[MemberOf] DC(Th)>

@Tag[indep]
Independence (or "none of the above") --
@T<@g[s],@K[Not]@g[s] @K[NotMemberOf] DC(Th)>.
@END[Enumerate]

Under which of these case(s) should a fact be considered new?
Certainly not sentences in case @Ref[alreadyin].
As the theory @T<Th> is deductively closed,
case @Ref[deduce] collapses into case @Ref[alreadyin],
and should, for the same reason, be eliminated.
We want to avoid case @Ref[incons],
as it would render the resultant
@T<Th' @K[Gets] Th@K[PlusSign]@g[s]>@Foot{
The notation @T<T @K[Gets] @g[x]> means the theory @T<T> is
assigned the deductive closure of the set, @g[x];
and
@T<Th@K[PlusSign]@g[s]> refers to the deductive closure of
@T<Th@K[Union]@K[LeftBrace]@g[s]@K[RightBrace]>.}
theory inconsistent.
(See Note @Ref[SemClos].)
Hence only members of case @Ref[indep] can be new.

The inclusion goes both ways.
Every sentence in this category,
(or at least every one constructed in the language of @T<Th>,)
is constraining.
By the definition of case @Ref[indep],
the theory derived by adding @T<@K[Not]@g[s]> to @T<Th> would be consistent,
and therefore reflect a possible world.
This possibility, of course, vanishes once we add @T<@g[s]>.

Hence one definition of @T<N> is
@BEGIN[Equation]
@TAG[DefnN1]
N(Th,@g[s]) @K[IFF] (@g[s] @K[NotMemberOf] DC(Th)) @K[And] (@K[Not]@g[s] @K[NotMemberOf] DC(Th))
@END[Equation]
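In the propositional case this definition can be checked mechanically:
by completeness, membership in @T<DC(Th)> coincides with truth in every
satisfying assignment.
The following is a minimal illustrative sketch (in modern Python, with a
sentence represented as a truth function over assignments -- an encoding
chosen for this example, not the paper's notation; the function names are
likewise this example's own):

```python
from itertools import product

def models(theory, symbols):
    """All truth assignments over `symbols` satisfying every sentence in `theory`."""
    out = []
    for vals in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, vals))
        if all(s(m) for s in theory):
            out.append(m)
    return out

def in_dc(theory, sigma, symbols):
    # sigma in DC(Th), identified with Th |= sigma via completeness
    return all(sigma(m) for m in models(theory, symbols))

def N(theory, sigma, symbols):
    # the independence case: neither sigma nor (not sigma) is derivable
    return (not in_dc(theory, sigma, symbols) and
            not in_dc(theory, lambda m: not sigma(m), symbols))

symbols = ["A", "B"]
th = [lambda m: m["A"] == m["B"]]            # Th = DC({A <=> B})
assert N(th, lambda m: m["B"], symbols)                      # independent: new
assert not N(th, lambda m: m["A"] == m["B"], symbols)        # already derivable
assert not N(th, lambda m: m["A"] and not m["B"], symbols)   # inconsistent with Th
```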
@Comment[Semantic N]
We will first consider a semantic (model theory based) definition of newness:
@T<@g[s]> is "new" if it eliminates some possible interpretation.@Foot{
We are only concerned with "true interpretations",
which map symbols onto a valid model.
We choose "interpretation" rather than "model"
to emphasize that we are dealing with the mapping, rather than its range.
(A model comprises both a structure and such a mapping.)}
That is, given any theory @F1[T],
in the language @F1[L],
there is a set of interpretations, 
@T<@F1[I]@-[T] @K[Eq] @K[LeftBrace]I@-[j]@K[RightBrace]>,
where each @T<I@-[j]> maps each symbol of 
@F1[L] onto objects or sets of tuples of objects in the "real world".@Foot{
In particular, this means that the ranges of different interpretations
can overlap;
and that the "universe" is fixed beforehand;
we will elaborate on this "real world"edness criterion later,
in Section @Ref[Obs].}
That is, each constant is mapped onto an object,
and each n-ary relation onto its characteristic set --
those n-tuples which satisfy it.

Notice that
adding additional facts to a theory can only restrict the set of
possible interpretations:
@T<T@K[SubSet]T'> means that
@T<@F1[I]@-[T']@K[SubSet]@F1[I]@-[T]>.
(As "interpretation" is not well defined for inconsistent theories,
of course we insist that @T<T> and @T<T'> are each consistent.)
This leads to the proposal that
@BEGIN[Equation]
@Tag[Defn-N-Sem]
N@-[Sem](Th,@g[s]) @K[IFF]  @~
@F1[I]@-{Th@K[Union]@g[s]} @K[ProperSubSet] @F1[I]@-[Th].
@END[Equation]@Foot{
Note that @K[ProperSubSet] denotes @i[Proper] subset.}
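For a finite propositional language this semantic test is directly
computable by enumerating assignments.
A sketch, using the same illustrative encoding as before (sentences as
Python truth functions; the names here are this example's own, not the
paper's notation):

```python
from itertools import product

def interpretations(theory, symbols):
    # I_Th: the assignments (as value tuples) under which every sentence of Th holds
    return {vals for vals in product([True, False], repeat=len(symbols))
            if all(s(dict(zip(symbols, vals))) for s in theory)}

def n_sem(theory, sigma, symbols):
    # sigma is new iff adding it properly shrinks I_Th
    # (assumes Th plus sigma is consistent, as the text requires)
    before = interpretations(theory, symbols)
    after = interpretations(theory + [sigma], symbols)
    return after < before          # proper subset

syms = ["A", "B"]
assert n_sem([], lambda m: m["A"], syms)                    # kills two of four interpretations
assert not n_sem([], lambda m: m["A"] or not m["A"], syms)  # tautology eliminates nothing
```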

@Comment["Syntactic" N]
Can we express this fact "syntactically",
i.e. in terms of logical deducability rather than "semantic" validity?

There are three ways a sentence @g[s] can relate to a
(deductively closed, consistent) theory @T<Th>:
@T<@g[s]> might be in @T<Th>,
@T<@K[Not]@g[s]> might be in @T<Th>,
or neither.
Under which of these cases should a fact be considered new?
Certainly not the first -- clearly we would want
@T<Th' @K[Gets] Th@K[PlusSign]@g[s]>
to be larger than @T<Th>, which would NOT be the case here.
Nor do we want @T<Th'> to be inconsistent --
which eliminates the second case.
This leaves only the third case, in which @T<@g[s]> is independent of @T<Th>.
Hence we propose the syntactic criterion:
@BEGIN[Equation]
@TAG[DefnNSyn]
N@-[Syn](Th,@g[s]) @K[IFF] (@g[s]@K[NotMemberOf]Th) @K[And] @~
(@K[Not]@g[s]@K[NotMemberOf]Th).
@END[Equation]

@BoldWord[Equivalence]
It is a trivial application of Godel's Completeness Theorem
to see that these two definitions are equivalent
(at least in any theory whose language remains fixed).
(See @Cite[LogicTheory].)
As these are equivalent, we will hereafter simply use @T<N>.
(Sketch of proof:
@T<N@-[Syn](Th,@g[s]) @K[Implies] N@-[Sem](Th,@g[s])>:@*
Define @T<Th'@K[Gets] Th@K[PlusSign]@g[s]>.
As @T<@F1[I]@-[Th']@K[Subset]@F1[I]@-[Th]>,
it is sufficient to show that
@T<@F1[I]@-[Th']@K[NEq]@F1[I]@-[Th]>.
Assume that @T<@g[s]> did not eliminate any member of @T<@F1[I]@-[Th]>;
i.e. this @T<@g[s]> "just happened" to be true under all of those interpretations.
That would mean that @T<@g[s]> is true in all models of @T<Th>;
i.e. @T<Th@K[DoubleTurnstile]@g[s]>.
This means that @T<Th@K[RightTurnstile]@g[s]>,
by Godel's Completeness Theorem.
But this directly contradicts the initial assumption that
@T<@K[Not](Th @K[RightTurnstile] @g[s])>, from 
@T<N@-[Syn](Th,@g[s])>.
Hence there must be some interpretation, @T<I@-[j]>,
in which this @T<@g[s]> is false;
and this @T<I@-[j]> must therefore be thrown out of
@T<@F1[I]@-[Th']>, proving that
@T<@F1[I]@-[Th']@K[Neq]@F1[I]@-[Th]>, as desired.

@T<N@-[Sem](Th,@g[s]) @K[Implies] N@-[Syn](Th,@g[s])>:@*
As @T<@F1[I]@-[Th']@K[NEq]@F1[I]@-[Th]>,
clearly @T<Th'@K[NEq]Th>,
which means that @T<@g[s]@K[NotMemberOf]Th>.
Had @T<@K[Not]@g[s]> been in @T<Th>, then @T<Th'> would be inconsistent,
and @T<@F1[I]@-[Th']> would either be undefined
or (if you insist) include every possible interpretation;
in either case it would not be contained within
@T<@F1[I]@-[Th]>. @K[QED])

The proof above demonstrates the equivalence of the proof-based and
model-based definitions of newness, with respect to a theory --
at least when the language @F1[L] is fixed.
That is,
@T<@F1[I]@-[Th'] @K[ProperSubSet] @F1[I]@-[Th]>, 
can only hold when
@T<@g[s],@K[Not]@g[s]@K[NotMemberOf]Th>;
and both seem to imply the assertion 
@T<@g[s]> is new -- @T<N(Th,@g[s])>.
While this independence constraint is necessary, it is far from sufficient
to capture our intuitive notion of newness.
There are many sentences which fit in this category,
which should not qualify as new.
For example, nothing prevents us from adding in a random sentence,
composed of new symbols which are unrelated to anything in the
current theory.
Such weak statements (seem to) convey no real meaning.

@Chapter["New" with respect to a Symbol (and a Theory)]
@Label[PropCase]

In many situations, (see Section @Ref[Uses],)
it is not enough to realize that some assertion is new;
rather, one has to understand that it is a new fact @i{about some object}.
We therefore define the ternary relation,
@T<New(Th,A,@g[s])>, to mean that
the assertion, @g[s], expresses
a new fact about the object@Foot{
Actually, we will see later that we are dealing with a syntactic symbol,
as opposed to its referent.}
@T<A>, with respect to the theory @T<Th>.
The "learning step" will involve adding the sentence @g[s] to the theory.

Clearly @T<New(Th,A,@g[s]) @K[Implies] N(Th,@g[s])>,
as claiming that @g[s] is a new fact about @T<A> with respect to @T<Th> 
must mean that @g[s] is new to @T<Th>.
What else must be true?
Informally, a statement is new if it helps the hearer
to specify more completely the object in question,
and/or to derive additional conclusions about that object --
deducing facts which could not have been derived without this new statement.

This chapter will cover the definitional case of @T<New>,
by presenting a series of "increasingly more nearly correct" descriptions.
For simplicity, the examples will be taken from propositional logic.
(Later we will discuss the ways this is, and is not, a limitation.)
This chapter is laced with notes hinting at the ways the predicate calculus
case can be slightly different.
These comments will be discharged in the following chapter.
Each proposal will begin with the intuition involved,
followed by a formal description of this conjecture,
and finally (in all but the final case) an example in which it fails
and an analysis of this deficiency.
The final definition will be used in the next chapter,
as the framework in which to discuss newness in the 
more general predicate calculus case.

@Section[Conjecture #1: Syntactic Method]
@Label[SynMethod]

@BoldWord[Intuition]
Clearly most statements which relay information about some
object will contain that symbol itself.
This leads to the proposed syntactic solution (strawman):
@BoldWord[Conjecture]
The sentence @T<@g[s]> conveys new information about the symbol @T<A>
if the token "A" is lexically included in the string of tokens which
form @T<@g[s]>;
denoted with the opaque assertion @T<LexInclude(A,@g[s])>.
(E.g., @T<LexInclude(A,A@K[And]B)>.)
For the reasons mentioned above,
we will, of course, further insist that 
@T<@g[s]> be independent of @T<Th> --
i.e. @T<N(Th,@g[s])>.
Formally,
@BEGIN[Conjecture]
@Tag[G#1]
New@-[Syn](Th,A,@g[s]) @K[Iff] N(Th,@g[s]) @K[And] LexInclude(A,@g[s])
@END[Conjecture]
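The lexical test itself is trivial to state.
As an illustrative sketch (the paper's @T<LexInclude> is deliberately
opaque about tokenization, so the token-list representation and function
name below are purely hypothetical stand-ins):

```python
def lex_include(symbol, sigma_tokens):
    # LexInclude(A, sigma): the token "A" occurs among the tokens forming sigma.
    # Conjecture G#1 would AND this with the independence test N(Th, sigma).
    return symbol in sigma_tokens

assert lex_include("A", ["A", "And", "B"])        # e.g. LexInclude(A, A-and-B)
assert not lex_include("A", ["B"])                # "B" contains no token A...
assert lex_include("A", ["Not", "A", "Or", "B"])  # ...while (not-A)-or-B does
```

The two counterexamples developed next in the text show why this purely
lexical criterion is neither necessary (second assertion) nor sufficient
(third assertion).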

@BoldWord[Failing]
Unfortunately this syntactic condition is neither necessary nor sufficient.
To show it is not necessary, realize that we want
@T<New(@K[LeftBrace]A@K[IFF]B@K[RightBrace],A,"B" )> to be true.
(Read this: the sentence @T<B> is new about @T<A>,
in the context of the theory @T<A@K[IFF]B>.)
After all, this means that @T<A> must now be @u<T>@Foot{
The underbar notation denotes the referents of the corresponding
linguistic symbols.},
which had not been the case before asserting @T<B>.
@CaseForm<Case 1: "B", when "A<IFF>B">
@Label[Bis1]
Suppose
@BEGIN[Equation]
Th1 @K[Gets] @K[LeftBrace] A@K[IFF]B @K[RightBrace]
@END[Equation]
Then hearing that @T<B> (is true)
would indeed say something about @T<A>,
even though it did not include that symbol.
(E.g. we can then deduce the fact that
@T<A> is true,
which specifies the "truth referent" of the @T<A> symbol.)
That is, we would want
@T<New(Th1,A,"B")>.

(In the predicate calculus case
there is another class of examples of this,
where the size of the universe plays a role.
See Note @Ref[Univ] for details and discussion.)

To show insufficiency is a little trickier (and more subjective).
Should
@T<New( @K[LeftBrace]A@K[Or]B@K[RightBrace], A, "(@K[Not]A)@K[Or]B")>
be true?
We argue the answer is no:
in this context, 
asserting @T<(@K[Not]A)@K[Or]B> is the same as asserting @T<B>,
which we know says nothing new about @T<A>!
@CaseForm[Case 2: "(<Not>A)<Or>B", when "A<Or>B"]
Suppose
@BEGIN[Equation]
Th2 @K[Gets] @K[LeftBrace] A@K[Or]B @K[RightBrace].
@END[Equation]
In this situation, is @T<(@K[Not]A)@K[Or]B> a new fact about @T<A>?
After adding this clause to @T<Th2>,
we see that @T<A> can still be assigned either @u<T> or @u<F>,
and that @T<B> must be @u<T>.
This is precisely the effect of hearing that @T<B>,
which says nothing about @T<A>!
So @T<(@K[Not]A)@K[Or]B> says nothing about @T<A>,
even though it contains that symbol.
As this statement conveys no new usable information about @T<A>, we want
@T<@K[Not]New(Th2,A,"(@K[Not]A)@K[Or]B")>.
@Label[AorB]
It seems clear we need a more powerful, (and, we will see, semantic,)
method for specifying novelty.

(There is a general method for doing this in the predicate calculus case,
involving the use of a relation symbol which was not in
@F1[I], the language of the original theory, @T<T>.
See the @Ref[Floogle] example, later.)

@Section[Conjecture #2: Reduction of Interpretations]
@NoIndent()(Subtitle: Model Theory, to the rescue!)
@Label[ReduceInter]

@BoldWord[Intuition]
Recall the model-theoretic notion of possible interpretations discussed
in Section @Ref[TheoryAlone].
We can augment the notation there to define the
"interpretation range" of a particular symbol.
Given any symbol @T<s>,
we can define the image of that symbol under an interpretation,
@T<I@-[j][s]>, as its referent in the world --
here either @u[T] or @u[F].@FOOT{
Two comments:
(1) We are, for now, considering only standard models of propositional logic --
where @u[T] and @u[F] behave correctly.
(2) In the predicate calculus case, the image of a symbol may be an object,
or a set of tuples.
There we have to insist that some objects be specified @i[a priori],
and are therefore not subject to ambiguity.
(Feel free to assume an oracle fixes this correspondence...)}
We can similarly define the image of the set,
@T<@F1[I]@-[Th][s] @K[Eq] 
@K[LeftBrace]I@-[j][s] @K[Modulo] I@-[j]@K[MemberOf]@F1[I]@-[Th] @K[RightBrace]>.
(When it is unambiguous, we will omit the @T<Th> sub- and super-scripts.)

As in the "only wrt theory" case,
we have @T<@F1{I}@-[T'][s] @K[SubSet] @F1{I}@-[T][s]>
whenever @T<T@K[SubSet]T'>,
as additional facts can only further restrict the range of
possible interpretations for any symbol.
Given this condition we can define
@BEGIN[Conjecture]
@Tag[G#2]
New@-[FI]( T, s, @g[s] )  @K[IFF]  @F1[I]@-[T'][s] @K[ProperSubSet] @F1[I]@-[T][s],
@END[Conjecture]
@NoIndent()where @T<T' @K[Gets] T@K[PlusSign]@g[s]>.
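This interpretation-range test is again computable in the propositional
case.
A sketch, using the same illustrative truth-function encoding as the
earlier examples (not part of the paper; the function names are this
example's own):

```python
from itertools import product

def image(theory, symbols, s):
    # I_Th[s]: the set of values assigned to s across all interpretations of Th
    return {dict(zip(symbols, vals))[s]
            for vals in product([True, False], repeat=len(symbols))
            if all(f(dict(zip(symbols, vals))) for f in theory)}

def new_fi(theory, s, sigma, symbols):
    # Conjecture G#2: adding sigma properly shrinks the image of s
    return image(theory + [sigma], symbols, s) < image(theory, symbols, s)

syms = ["A", "B"]
th1 = [lambda m: m["A"] == m["B"]]                 # Th1 = {A <=> B}
assert new_fi(th1, "A", lambda m: m["B"], syms)    # hearing B pins A down to T
th2 = [lambda m: m["A"] or m["B"]]                 # Th2 = {A or B}
assert not new_fi(th2, "A",
                  lambda m: (not m["A"]) or m["B"], syms)  # (not-A)-or-B leaves A ambiguous
```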

@BoldWord[Conjecture]
It seems plausible to conjecture that
a statement @g[s]
conveys new information with respect to a theory @T<T> and a symbol @T<s>
exactly when @T<New@-[FI](T,s,@g[s])> holds -- i.e.
@BEGIN[Conjecture]
@Tag[G#2-1]
New@-[FI]( T, s, @g[s] ) @K[IFF] New( T, s, @g[s] ).
@END[Conjecture]

This @T<New@-[FI]> definition seems, at first, adequate.
In addition to paralleling the @T<N> situation,
it also resonates nicely with the ideas of 
Shannon Information Theory (@Cite[InfoTheory]),
in which information is tied to the number of
possible values of a signal.
It also works perfectly on the two cases given in Section @Ref[SynMethod].

@BoldWord[Failing]
Unfortunately, this @T<New@-[FI]> requirement does not include all desired cases:
there are some sentences which do convey new information
(in the informal sense outlined above)
but which do not satisfy this constraint.
@CaseForm[Case 3: A <IFF> B]
@Label[AisB]
Start with an empty theory,
@T<Th3 @K[Gets] @K[LeftBrace]@K[RightBrace]>,
in the language whose (non-logical) constants are @T<A> and @T<B>.
There are four obvious interpretations,
depending on the values of @T<A> and @T<B>, shown in Table @Ref[ABValues].
@BEGIN[Table]
@BEGIN[Verbatim, Spacing=1.1]
@tabclear
    @↑  A   @↑B
@\@ux[@& @\  ]
I@-[0]@\| @u[F@\F]
I@-[1]@\| @u[F@\T]
I@-[2]@\| @u[T@\F]
I@-[3]@\| @u[T@\T]
@Caption[Interpretations of A and B]
@Tag[ABValues]
@END[Verbatim]
@END[Table]
By inspection @T<@F1[I]@-[Th3][A] @K[Eq] @K[LeftBrace]@un[T,F]@K[RightBrace]>.
Now form @T<Th3'> by adding the independent statement @T<A @K[IFF] B>
to @T<Th3> --
i.e. @T<Th3' @K[Gets] Th3@K[PlusSign]A@K[IFF]B>.
While this only leaves two of the four interpretations, 
@T<I@-[0]> and @T<I@-[3]>, 
@T<@F1[I]@-[Th3'][A]> remains @T<@K[LeftBrace]@un[T,F]@K[RightBrace]>;
indicating that @T<A@K[IFF]B> said nothing @T<New@-[FI]> about @T<A>.
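One can confirm mechanically that this case defeats @T<New@-[FI]>: the
image of @T<A> is unchanged.
A sketch (same illustrative truth-function encoding as the earlier
examples; not the paper's notation):

```python
from itertools import product

def image(theory, symbols, s):
    # I_Th[s]: values taken by s over the theory's interpretations
    return {dict(zip(symbols, vals))[s]
            for vals in product([True, False], repeat=len(symbols))
            if all(f(dict(zip(symbols, vals))) for f in theory)}

syms = ["A", "B"]
th3 = []                                  # Th3: the empty theory
iff = lambda m: m["A"] == m["B"]          # the sentence A <=> B
assert image(th3, syms, "A") == {True, False}
assert image(th3 + [iff], syms, "A") == {True, False}   # image unchanged: not New_FI
```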

Hence we need a different specification for @T<New>
(which is why the @T<New@-[FI]> relation above had a different name).
Realize there is no need to try to improve this situation by, somehow,
combining the earlier syntactic approach with this semantic one --
Section @Ref[SynMethod] shows clearly how that will fail.
@BoldWord[Analysis]
What we need is an understanding of what the semantic approach above was missing;
why that case didn't work.
The problem arises with cases like @Ref[AisB] above.
We could let our @T<New@-[FI] @K[Identical] New> assumption stand,
by declaring that,
in this situation, @T<A@K[IFF]B> is not new.
But this feels wrong -- that statement @i[does] say something about @T<A>:
if we later hear that @T<@K[Not]B>, we will then be able to infer that
@T<@K[Not]A>, a conclusion which would not have followed @i[sans] that sentence.
That is, a sentence "feels" new if it means that other, future
statements can in fact limit the possible interpretations of @T<A>.

So there are (at least) two ways a statement can be new:
@Label[Analysis]
@BEGIN[ITEM1]
if it, now, limits the number of interpretations of @T<A>; or

if it establishes a dependency of @T<A> on some other symbols (objects?),
which may later lead to such a reduction (of the above type).
@END[ITEM1]

Realize that @T<New@-[FI]> does adequately cover the first case, but fails to handle
the second.
That is,
@BEGIN[Equation]
@Tag[NewFIToNew]
New@-[FI]( T, c, @g[s] ) @K[Implies] New( T, c, @g[s] ).
@END[Equation]
How can we enhance our definition of novelty to include cases like this one?
The idea of using a semantic constraint seems on the right track
-- in terms of limiting the number of interpretations.
The challenge now is dealing with the second situation.

@Section[Conjecture #3: Other Symbols]
@Label[PartitionRed]

@BoldWord[Intuition]
We need a more general, semantic notion.
Thus far we have not considered any other symbol in the language;
perhaps we can get some leverage by considering them as well?

We mentioned above that, by asserting @T<A@K[IFF]B>,
we can think of @T<A> as "depending" on @T<B>.
What does it mean for (the interpretation of) one symbol to depend
(on the assignment) of other symbols?
One answer begins by splitting @F1[L]'s symbols into two camps --
the symbol @T<A>, versus all the others.

We will say that @T<I@-[1]> and @T<I@-[2]>,
a pair of interpretations of the theory @T<Th>,
are equivalent modulo @T<A>,
written @T<I@-[1]@K[ApproxEq]@+[A] I@-[2]>,
if their interpretations match for every symbol
(possibly) excluding @T<A>.
That is,
@BEGIN[Conjecture]
I@-[1]@K[ApproxEq]@+[A] I@-[2] @K[IFF]
     @K[ForAll] x. x@K[Eq]"A" @K[Or] I@-[1][x] @K[Eq] I@-[2][x].
@END[Conjecture]
The quantified variable "@T<x>" above varies over symbols of @F1[L],
the language of @T<Th>.

Observe now that
@T<@K[ApproxEq]@+[A]> is an equivalence relation, as
it is clearly reflexive, symmetric and transitive.
(I.e.
@BEGIN[Equation, Spacing = 1.5]
@Tag[EquivDefn]
I@K[ApproxEq]@+[A] I
I@-[1]@K[ApproxEq]@+[A] I@-[2] @K[Implies] I@-[2]@K[ApproxEq]@+[A] I@-[1]
(I@-[1]@K[ApproxEq]@+[A] I@-[2]) @K[AND] (I@-[2]@K[ApproxEq]@+[A] I@-[3]) @~
@K[Implies] I@-[1]@K[ApproxEq]@+[A] I@-[3]
@END[Equation])

We can therefore partition @T<@F1[I]@-[Th]> into a set of equivalence classes,
each member denoted by
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]>,
where
@BEGIN[Equation]
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket] @K[Eq] @~
@K[LeftBrace]I@K[MemberOf]@F1[I]@-[Th] @K[Modulo] @~
@K[ForAll] x. x@K[Eq]"A" @K[Or] I[x] @K[Eq] I@-[j][x]@K[RightBrace]@Foot{
Note: (1) The quantified variable "@T<x>" above varies over symbols of @F1[L],
the language of @T<Th>.
(2) We will again omit the @T<Th> subscript when unambiguous.
(3) We can regard the earlier @Ref[G#2] as the dual of this partition:
While it lumped together those interpretations which shared the same
mapping of @T<A>, this division lumps together those interpretations
which agree on everything but @T<A>.}.
@END[Equation]
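The equivalence classes are easy to compute in the propositional case:
group the surviving assignments by their values on every symbol except
@T<A>.
A sketch (same illustrative encoding as the earlier examples; the
function name is this example's own):

```python
from itertools import product

def classes(theory, symbols, a):
    # Partition I_Th: group assignments by their values on every symbol except `a`
    out = {}
    for vals in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, vals))
        if all(f(m) for f in theory):
            key = tuple(m[x] for x in symbols if x != a)
            out.setdefault(key, set()).add(vals)
    return out

# Th2 = {A or B}: the B=F class holds one interpretation, the B=T class holds two
rows = classes([lambda m: m["A"] or m["B"]], ["A", "B"], "A")
assert sorted(len(r) for r in rows.values()) == [1, 2]
```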


To illustrate the following points, let's re-consider 
Case @Ref[AorB],
the situation
on page @PageRef[AorB],
where @T<Th2 @K[Gets] @K[LeftBrace] A@K[Or]B @K[RightBrace]>.
For pedagogic reasons, we will represent it using Tableau @Ref[AB-Interp], below.
@BEGIN[Tableau]
@BEGIN[Verbatim]
@tabclear
         A
     @↑  @un[F   @↑T]  @↑
@\@ux[@& @\@& @\ ]
   @u[F]@\| @K[Dash]@\I@-[0]@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]
B   @\|@\@\|
   @u[T]@\| I@-[2]@\I@-[1]@\| @K[Scurve] @~
@K[LeftDoubleBracket]I@!@+[1]@/@-[Th]@K[RightDoubleBracket]
@\@ux[|@& @\@& @\|]
@Caption(Interpretations Allowed for @T<Th2@K[Gets]@K[LeftBrace]A@K[Or]B@K[Rightbrace]>)
@TAG[AB-Interp]
@END[Verbatim]
@END[Tableau]

In general, each row is indexed by an assignment to all the @i[M-1] symbols
of @T<@F1[L]@K[MinusSign]@K[LeftBrace]A@K[RightBrace]>,
and each column by an assignment to @T<A>.
Hence each @T[<i,j>] position corresponds to the interpretation which assigns
the i@+[th] possible value to @T<A>, and
the j@+[th] @i[M-1]-tuple of values to the other symbols.
Here we see that interpretation @T<I@-[2]> assigns @u[F] to the symbol @T<A>,
and @u[T] to @T<B>.
The vacancy in the upper lefthand corner shows there is no interpretation 
which maps @T<A> to @u[F] and @T<B> to @u[F], as @T<Th2> forbids that.

Realize that this representation explicates the partitions --
they are just the rows of the above tableau!
As shown above,
@T<@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket] @K[Eq]
@K[LeftBrace]I@-[0]@K[RightBrace]>, and
@T<@K[LeftDoubleBracket]I@!@+[1]@/@-[Th]@K[RightDoubleBracket] @K[Eq]
@K[LeftBrace]I@-[1],I@-[2]@K[RightBrace]>.

What will happen when an independent @T<@g[s]> is added to @T<Th>?
As shown above, some interpretations @T<I@-[i]> will be eliminated.
What does this imply about the set of equivalence classes,
@T<@K[LeftBrace]@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@~
@K[RightBrace]@-[j]>,
as compared to
@T<@K[LeftBrace]@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]@~
@K[RightBrace]@-[j]>?

Note that each (uneliminated) @T<I@-[i]> will remain in the same partition,
for any additional sentence @g[s].
Hence, for each j,
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket] @K[SubSet]
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]>.


Take, for example, the sentence @T<A@K[IFF]B>,
which eliminates @T<I@-[0]> and @T<I@-[2]>.
This means
@T<@K[LeftDoubleBracket]I@!@+[0]@/@-[Th']@K[RightDoubleBracket] @K[Eq]
@K[LeftBrace]@K[RightBrace]>, and
@T<@K[LeftDoubleBracket]I@!@+[1]@/@-[Th']@K[RightDoubleBracket] @K[Eq]
@K[LeftBrace]I@-[1]@K[RightBrace]>.

Consider now those partitions which lose some members --
those j such that
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket] @K[NEq]
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]>.
What would happen if each one of these "vanished" --
i.e. if @T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket] @K[Eq]
@K[LeftBrace]@K[RightBrace]>, for all such j?
That would mean that every partition either retained all of its members,
or none of them.
At first glance@Foot{
And only at first glance...
This turns out to be slightly wrong,
but is retained for illustrative purposes.}
it appears that all of @T<A>'s possible assignments are unaffected.
Stated loosely,
the only change to the "ambiguity" of the theory is in terms of 
which partition to choose,
and NOT in terms of which assignment to @T<A> to select, within any partition.

@BoldWord[Conjecture]
We therefore conjecture that this condition,
@BEGIN[Equation]
@Tag[E3]
@K[ForAll] j. @K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket] @K[Eq] @~
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket] @K[Or] @~
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket] @K[Eq] @~
@K[LeftBrace]@K[RightBrace]
@END[Equation]
implies that @g[s] conveys no new information about @T<A>.
Turning this around, we propose
@BEGIN[Conjecture]
@Tag[G#3]
New@-[Part]( Th, A, @g[s] ) @K[IFF] @~
@K[Exists] j. @K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket] @~
@K[ProperSuperSet] @K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket] @~
@K[And] @~
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket] @K[NEq] @~
@K[LeftBrace]@K[RightBrace]
@END[Conjecture]
That is, @g[s] is a new statement about @T<A> if
there is some row of this table which @+[(1)] has at least one new 0,
and which @+[(2)] does not thereby become all 0.@Foot{
Note that the @T<New@-[FI]> condition corresponds to seeing that one column of
this table has become all 0.}

The sentence @T<B> provides an example of this process.
Adding it wipes out the entire first row, leaving the second totally intact.
That is,
@T<@K[LeftDoubleBracket]I@!@+[1]@/@-[Th']@K[RightDoubleBracket]@K[Eq]@~
@K[LeftDoubleBracket]I@!@+[1]@/@-[Th]@K[RightDoubleBracket]@K[Eq]@~
@K[LeftBrace]I@-[1],I@-[2]@K[RightBrace]>,
and 
@T<@K[LeftDoubleBracket]I@!@+[0]@/@-[Th']@K[RightDoubleBracket]@K[Eq]@~
@K[LeftBrace]@K[RightBrace]>.
This reinforces our suspicion that 
this condition implies nothing new was learned about @T<A>,
as this sentence indeed says nothing about @T<A>.

On the other hand, recall the @T<A@K[IFF]B> example from above.
There was some 
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]>
which was reduced, but not to @K[LeftBrace]@K[RightBrace]:
namely
@T<@K[LeftDoubleBracket]I@!@+[1]@/@-[Th']@K[RightDoubleBracket]>,
which went from
@T<@K[LeftBrace]I@-[1],I@-[2]@K[RightBrace]> to
@T<@K[LeftBrace]I@-[1]@K[RightBrace]>.
... and sure enough, we do believe that this sentence did relay some
new information about @T<A>.

As further reinforcement, recall the @T<(@K[Not]A)@K[Or]B> case mentioned above,
on page @PageRef[AorB].
Working through this example, it is found NOT to be a new statement, as desired.

Using this definition, we see that
@BEGIN[Equation]
@tabclear
@K[Not]@↑New@-[Part]( Th2, A, "B" )
@\New@-[Part]( Th2, A, "A@K[IFF]B" )
@K[Not]New@-[Part]( Th2, A, "(@K[Not]A)@K[Or]B" )
@END[Equation]
all as desired.
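These verdicts can be checked mechanically. The following is a minimal Python sketch (all names are ours, not part of the theory; @T<Th2>'s models are reconstructed from the examples above as the three assignments @T{<A,B>} = (T,F), (T,T), (F,T), i.e. the theory "A or B"). It implements the @Ref[G#3] test directly: a sentence is @T<New@-[Part]> iff some row of the tableau strictly shrinks without vanishing.

```python
from itertools import product

def models(symbols, sentences):
    """Full interpretations (truth assignments) satisfying every sentence."""
    out = []
    for vals in product([False, True], repeat=len(symbols)):
        i = dict(zip(symbols, vals))
        if all(s(i) for s in sentences):
            out.append(i)
    return out

def new_part(symbols, theory, a, sigma):
    """The G#3 test: some row (an assignment to the non-`a` symbols)
    loses at least one member, yet does not vanish entirely."""
    others = [x for x in symbols if x != a]
    def rows(interps):
        r = {}
        for i in interps:
            r.setdefault(tuple(i[x] for x in others), []).append(i)
        return r
    before = rows(models(symbols, theory))
    after = rows(models(symbols, theory + [sigma]))
    return any(bool(after.get(k)) and len(after[k]) < len(v)
               for k, v in before.items())

# Th2 reconstructed as "A or B": models (A,B) = (T,F), (T,T), (F,T).
SYMS = ['A', 'B']
Th2 = [lambda i: i['A'] or i['B']]

print(new_part(SYMS, Th2, 'A', lambda i: i['B']))                  # B          -> False
print(new_part(SYMS, Th2, 'A', lambda i: i['A'] == i['B']))        # A IFF B    -> True
print(new_part(SYMS, Th2, 'A', lambda i: (not i['A']) or i['B']))  # (~A) or B  -> False
```

The same function also returns False for "@K[Not]B", reproducing the failing discussed below.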

@BoldWord[Failings]
We saw above that @T<New@-[Part]> includes cases of newness which 
@T<New@-[FI]> misses.
However, the containment is not total --
@T<New@-[FI]> covers a few cases which 
@T<New@-[Part]> does not.
Using the @T<Th2> theory from example above,
consider adding in the independent statement @T<@K[Not]B>.
The second partition,
@T<@K[LeftDoubleBracket]I@!@+[1]@/@-[Th']@K[RightDoubleBracket]>,
vanishes, and the other is unaffected.
Hence @Ref[G#3] would imply that this statement is not new, wrt @T<A> --
we find @T<@K[Not]New@-[Part](Th2,A,"@K[Not]B")>,
which is wrong, as @T<A> is now forced to equal
(read "be assigned to") @u<T>,
whereas it was unrestricted before.

As @T<New@-[FI]> does the right thing here
(i.e. @T<New@-[FI](Th2,A,"@K[Not]B")>),
we could simply combine the two tests,
arguing that there are really two types of "new"ness going on here,
of exactly the flavors mentioned on page @PageRef[Analysis] above.
Unfortunately there are yet other problems.
Consider 
@CaseForm[Case 4: A <IFF> C, when A <IFF> B]
Take
@T<Th4 @K[Gets] @K[LeftBrace] A@K[IFF]B @K[RightBrace]>,
where @T<@F1[L] @K[Eq] @K[LeftBrace]A,B,C@K[RightBrace]>.
Tableau @Ref[AiffB-Tableau#1] below describes this situation.
@BEGIN[Tableau]
@BEGIN[Verbatim,Spacing=1.5]
@tabclear
                 A
             @↑  @un[F   @↑T]  @↑
        @\@ux[@& @\@& @\ ]
      @un[F,F]@\| I@-[0]@\@K[Dash]@\| @K[Scurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]
<B,C> @un[F,T]@\| I@-[1]@\@K[Dash]@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[1]@/@-[Th]@K[RightDoubleBracket]
      @un[T,F]@\| @K[Dash]@\I@-[5]@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[5]@/@-[Th]@K[RightDoubleBracket]
      @un[T,T]@\| @K[Dash]@\I@-[7]@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[7]@/@-[Th]@K[RightDoubleBracket]
        @\@ux[|@& @\@& @\|]
@Caption(Full Interpretations for @T<Th4@K[Gets]@K[LeftBrace]A@K[IFF]B@K[RightBrace]>)
@TAG[AiffB-Tableau#1]
@END[Verbatim]
@END[Tableau]
Now add a purportedly new sentence, @T<A@K[IFF]C>.
Certainly this should be new --
we agreed that sentences of this form should count back in the @T<Th3> case
(see Case @Ref[AisB]).
Only here it appears not to:
The second and third rows 
(@T<@K[LeftDoubleBracket]I@!@+[1]@/@-[Th]@K[RightDoubleBracket]> and
@T<@K[LeftDoubleBracket]I@!@+[5]@/@-[Th]@K[RightDoubleBracket]>)
vanish, and the other two remain unchanged!
Something is wrong!

@BoldWord[Analysis]
The problem lies in our inability to express the fact that @T<A>
here depends on @T<C>, without confusing this with the
already-known dependency on @T<B>.


@Section[Conjecture #4: Partial Interpretations]
@Label[SuperDepend]

@BoldWord[Intuition]
The intuitions expressed above -- with respect to dependencies --
are correct.
The problem is that we took too big a jump,
naively lumping the set of non-@T<A> symbols into a single,
unstructured mass.
What we need is a way to deal with the particular dependency on @T<C>
induced by @T<A@K[Iff]C>,
in a way which ignores other dependencies @T<A> may have.
In general, we have to see if @T<A> depends on the assignment of
a single constant first, 
(that is, of the form "@T<A> depends on @T<C>",
which happens when, for example, @T<A@K[IFF]C> is asserted,)
then if @T<A> depends on a pair of constants,
(consider the assertion @T<A@K[IFF](B@K[IFF]C)>@Foot{
That is, choose any assignment to @T<B>,
and note that @T<A> is still "arbitrary" 
-- that is, it could be either @u<T> or @u<F>, depending on @T<C>.
However, if both @T<B> and @T<C> are fixed,
then @T<A> is fully determined.},)
and then all triples, 4-tuples, etc.

We need some additional notation before we can specify this formally.
Begin by fixing some subset of the symbols,
@T<@g[x]@K[SubSet] @F1[L]@K[MinusSign]@K[LeftBrace]A@K[RightBrace]>.
With respect to this set, we can partition the set of interpretations,
@F1<I@-[Th]>, into equivalence classes, based on agreement on the referents
of those symbols.
That is,
@BEGIN[Equation]
@Tag[FancyPart]
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@g[x]} @K[Eq] @~
@K[LeftBrace]I@K[MemberOf]@F1[I]@-[Th] @K[Modulo] @~
@K[ForAll] x@K[MemberOf]@g[x]. I[x]@K[Eq]I@-[j][x] @K[RightBrace]
@END[Equation]
(Note that the earlier
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]>
was just the same as
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@F1[L]-A}>.)
We need one other definition:
@BEGIN[Equation]
@Tag[FP-A]
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@g[x]}(A) @K[Eq] @~
@K[LeftBrace] I[A] @K[Modulo] I @K[MemberOf] @~
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@g[x]} @K[RightBrace].
@END[Equation]

With this notation, we can express our final specification for newness:
@BEGIN[Conjecture, Spacing=1.2]
@Tag[G#4]
@Tabclear
New@-[PI]( Th, A, @g[s]) @K[IFF] 
   @K[Exists] @g[x] @K[SubSet] @F1[L]@K[MinusSign]@K[LeftBrace]A@K[RightBrace],j. @~
(@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@g[x]}(A)@~
@K[ProperSuperSet]@~
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]@-{@g[x]}(A)) @~
@K[And] @~
@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]@-{@g[x]}(A)@K[Neq]@~
@K[LeftBrace]@K[RightBrace]
@END[Conjecture]

Notice that we did not need a special case to handle
the @T<New@-[FI](Th,A,@g[s])> condition.
This follows from the fact that the
@T<@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@~
@-{@K[LeftBrace]@K[RightBrace]}(A)> set is, in fact, equal to
@T<@F1[I]@-[Th][A]>.@Foot{
***SEE IJCAI***
and hence that @T<Th'> is inconsistent, we are not worried...}
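To make @Ref[G#4] concrete, here is a minimal Python sketch (restricted to the propositional case, with interpretations as truth assignments; all names are ours). It enumerates every subset @g[x] of the non-@T<A> symbols and every equivalence class, and reports newness exactly when some class's set of values for @T<A> strictly shrinks without emptying:

```python
from itertools import combinations, product

def models(symbols, sentences):
    """Full interpretations (truth assignments) satisfying every sentence."""
    out = []
    for vals in product([False, True], repeat=len(symbols)):
        i = dict(zip(symbols, vals))
        if all(s(i) for s in sentences):
            out.append(i)
    return out

def classes(interps, chi):
    """Partition interpretations by their assignments to the symbols in chi."""
    cls = {}
    for i in interps:
        cls.setdefault(tuple(i[x] for x in chi), []).append(i)
    return cls

def new_pi(symbols, theory, a, sigma):
    """The G#4 test: for some chi and some equivalence class (row),
    the set of values available to `a` strictly shrinks, yet stays nonempty."""
    others = [x for x in symbols if x != a]
    before = models(symbols, theory)
    after = models(symbols, theory + [sigma])
    for r in range(len(others) + 1):
        for chi in combinations(others, r):
            post = classes(after, chi)
            for key, cls in classes(before, chi).items():
                va = {i[a] for i in post.get(key, [])}
                vb = {i[a] for i in cls}
                if va and va < vb:        # strict, nonempty shrinkage
                    return True
    return False

Th4 = [lambda i: i['A'] == i['B']]                                    # A IFF B
print(new_pi(['A', 'B', 'C'], Th4, 'A', lambda i: i['A'] == i['C']))  # A IFF C -> True
```

Unlike the earlier row-partition test, this correctly classifies @T<A@K[IFF]C> as new given @T<A@K[IFF]B>; and, via the empty @g[x], it subsumes the @T<New@-[FI]> condition with no special case.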


This, too, can be visualized with a tableau --
see Tableau @Ref[AiffB-Tableau#2] below,
which corresponds to the earlier Tableau @Ref[AiffB-Tableau#1].
Here the rows are indexed by all legal assignments to all possible
@T<@g[x]@K[SubSet](@F1[L]@K[MinusSign]A)>,
not just assignment to the full set of @i[M-1] symbols,
as it was for @Ref[G#3]-type tableaus.
That is, a row can be indexed by an assignment to fewer constants --
e.g. just one constant, or even zero constants.
The columns remain indexed by an assignment to a single constant, @T<A>.
Each of these partitions corresponds to a row of the extended tableau.
There is a "1" in each column which is indexed by a member of the set
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@g[x]}(A)>.

The "@T[-]"s there should be read as "don't care".  
Hence the assignment of "@un[-,F]" to @T{<B,C>} means that @T<C> must be @u[F],
and @T<B> can be any value.
@BEGIN[Tableau]
@BEGIN[Verbatim, Spacing=1.1]
@tabclear
                A
            @↑  @un[F   @↑T]  @↑
         @\@ux[@& @\@& @\ ]
     @un[F,F]@\| 1@\0@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
     @un[F,T]@\| 1@\0@\| @K[SCurve]   @K[LeftArrow]   @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[2]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
     @un[T,F]@\| 0@\1@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[5]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
     @un[T,T]@\| 0@\1@\| @K[SCurve]   @K[LeftArrow]   @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[7]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
<B,C>    @\|@\@\|
     @un[-,F]@\| 1@\1@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B@K[RightBrace]}
     @un[-,T]@\| 1@\1@\| @K[SCurve]   @K[LeftArrow]   @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[5]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B@K[RightBrace]}
     @un[F,-]@\| 1@\0@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]C@K[RightBrace]}
     @un[T,-]@\| 0@\1@\| @K[SCurve]   @K[LeftArrow]   @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[2]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]C@K[RightBrace]}
         @\|@\@\|
     @un[-,-]@\| 1@\1@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]@K[RightBrace]}
         @\@ux[|@& @\@& @\|]
@Caption(Partial Interpretations for @T<A@K[IFF]B>)
@TAG[AiffB-Tableau#2]
@END[Verbatim]
@END[Tableau]
The other difference from the @Ref[G#3]-type tableau
is how the positions are labelled.
There each position corresponded to
a "full interpretation", which assigned a referent to each symbol of @F1[L]
(@i[M-1] of the symbols are assigned according to the row involved,
and the @i[M]@+[th] constant, @T<A>, by virtue of its column).
In these new tableaus each position corresponds to a partial interpretation,
which assigns referents to only some of the @i[M] symbols of @F1[L].
It may, therefore, refer to several different full interpretations.
So while we were able to use the name of the associated interpretation
in the @Ref[G#3] tableaus, 
here we simply tag each position with a
"1" if there is at least one interpretation associated with this position,
and a "0" otherwise.
That is, the particular names of the
interpretations, and even the number of them,
are unnecessary;
the only essential feature is whether there is any assignment to @T<A>
consistent with the assignments of the other variables.
This is the difference between dealing with
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@g[x]}>
and
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th]@K[RightDoubleBracket]@-{@g[x]}(A)>.@Foot{
Realize these new tableaus can be constructed directly from the
corresponding one of the earlier type, by 
@T<OR>ing together the relevant rows of the original tableau.}

@BoldWord[Conjecture]
Suppose that,
as the result of adding in @g[s],
there is now some row, j, of this augmented tableau which loses
one or more "1", but which is not set to all "0"s.
(Realize that several interpretations may have to be invalidated
to achieve a single "0", as that "1" may have represented more than
one interpretation.)
Based on the arguments presented above,
this means that the assignment to @T<A> associated with a lost column
is now dependent on the assignment of the variables associated with this row.
Hence, this condition implies that @g[s] must have been new.

Thanks to the last row of the tableau,
corresponding to @i{M-1} "don't care"s,
the @T<New@-[FI](Th,A,@g[s])> situation is no longer a special case,
as it was for @Ref[G#3].
That is, 
a column will have a "1" only if there is
some interpretation which assigns that value to @T<A>.
So,

@BEGIN[Conjecture]
@Tag[G#4-Informal]
@Tabclear
New@-[PI]( Th, A, @g[s] ) @K[IFF] 
      @K[Exists] @↑j @K[MemberOf] "rows in tableau". 
   @\("j loses at least one `1'") @K[And] ("j does not vanish")
@END[Conjecture]

Returning to this example,
we derive Tableau @Ref[AfterAiffC], below, after adding in @T<A@K[IFF]C>.
@BEGIN[Tableau]
@BEGIN[Verbatim]
@tabclear
                A
            @↑  @un[F   @↑T]  @↑
         @\@ux[@& @\@& @\ ]
     @un[F,F]@\| 1@\0@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
     @un[F,T]@\| 0@\0@\| @K[SCurve]  @K[LeftArrow]      @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[2]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
     @un[T,F]@\| 0@\0@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[5]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
     @un[T,T]@\| 0@\1@\| @K[SCurve]  @K[LeftArrow]      @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[7]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B,C@K[RightBrace]}
<B,C>    @\|@\@\|
     @un[-,F]@\| 1@\0@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B@K[RightBrace]}
     @un[-,T]@\| 0@\1@\| @K[SCurve]   @K[LeftArrow]   @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[5]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]B@K[RightBrace]}
     @un[F,-]@\| 1@\0@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]C@K[RightBrace]}
     @un[T,-]@\| 0@\1@\| @K[SCurve]   @K[LeftArrow]   @K[LeftArrow] @~
@K[LeftDoubleBracket]I@!@+[2]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]C@K[RightBrace]}
         @\|@\@\|
     @un[-,-]@\| 1@\1@\| @K[SCurve] @~
@K[LeftDoubleBracket]I@!@+[0]@/@-[Th]@K[RightDoubleBracket]@-{@K[LeftBrace]@K[RightBrace]}
         @\@ux[|@& @\@& @\|]
@Caption(Partial Interpretations for @T<@K[LeftBrace]A@K[IFF]B, A@K[IFF]C@K[RightBrace]>)
@Tag[AfterAiffC]
@END[Verbatim]
@END[Tableau]
Note that each of the @un[-,T] and @un[-,F] rows
now has one fewer "1",
which says that @T<A@K[IFF]C> is indeed @T<New@-[PI]>,
according to @Ref[G#4].

@NoIndent()Note the absence, this time, of any failing of the sort which undermined @Ref[G#3].
@Section[Analysis]
@Label[Analysis4]

Our basic thesis is that
@g[s] is a new (definitional) fact about @T<A>, with respect to the theory @T<Th>,
if, @i{under some set of circumstances,
it will limit the number of interpretations possible for this symbol}.
We have clearly achieved this,
by exhaustively enumerating every possible context in which @T<A>
might find itself dependent (including the "null dependency"),
and seeing if @T<A> loses a possible interpretation under that situation.

Our definition of context is as general as possible --
it includes every assignment to each and every n-tuple of variables.
Furthermore, by counterexample, we have shown that this 
(ridiculous) generality is necessary.
Given the italicized assumption above,
we have covered every situation,
and, given the counterexamples, done so as tightly as possible.

We will soon see there are yet other senses of
"new"ness which emerge in the enriched predicate calculus formalism.
... stay tuned.
@Chapter[Predicate Calculus Case]
@Label[PredCal]

@Section[How Similar to Propositional Case]
Everything above applies to the predicate calculus case:
in particular, every proof (counterexample) still works.
(Propositional cases can be embedded in the predicate case --
hence every counterexample requiring the embellishments above
works here as well.)
Furthermore, the tableau method can be used when dealing with 
a relation symbol, not just a constant.
As with constants, viewed as 0-ary relations,
we deal with the extension of the symbol, which is its characteristic set. 
Everything works as desired -- we just end up with HUGE tables.

@Section[How Different from Propositional Case]

But this is not enough.
In the predicate case we have more constants, ... and also relations, which cause problems.
We've been concerned only with constants,
which are either co-referential or not (that is, equal).
When handling relations, we can talk about overlap -- maybe a fuzzy set membership
(equality?).

Retain the old definition of @T<New>, from the propositional case;
but call it
@T<New@-[Prop]>, to differentiate it from the more complex case.
(Clearly
@T<New@-[Prop] @K[SubSet] New>.)

Definitional vs. assertional
(not an issue when dealing with the small universe of
@T<@K[LeftBrace]@ux[T,F]@K[RightBrace]>.)

Some new information, wrt A, is NOT in terms of pinning it down,
but, instead, is in terms of ascribing new properties to it.
(I.e. we had considered only definitional features; now for assertional.)
Can I learn anything new about A if I have already uniquely specified
the object to which it refers?
(That is, what if I do not know all of its features?)
E.g. I know who MRG is, but not his hair color.
Does this mean I "know" A?  Consider the possible-worlds approach.
[This is only possible if some things are already tied down -- otherwise there is often some
inherent ambiguity.]

An approach: realize that what we are further specifying may be the relation
involved:  e.g. the binary hair-color relation in the prior situation.
(I.e. in one interpretation, it may have included the pair <MRG Black>,
and <MRG Blonde> in another.)

@Section[Simplifying assumptions]
For the time being, we will make some major simplifications
(some we'll relax later, others we will argue do not constitute real restrictions,
and still others we'll just live with).
@BEGIN[Itemize]

@BEGIN[Multiple]
@Tag[RealWorld]
The (model) universe, a.k.a. the set of "referents" 
-- those objects which are the range of the interpretations of the symbols --
will be fixed in advance.
(When dealing with the predicate calculus formalism,)
the "range" of each instantiation,
(a.k.a. a model of @T<Th>,)
will include objects in the "real world",
as opposed to vague, theoretic symbols.
(This is contrasted with other notions of semantics,
where the "referent" is,
for example,
in the speaker's mind, or is based on socially shared conventions.)
Hence elements of the one model may overlap with those of another.

In particular, as before,
@T<I@-[i][A]> might equal @T<I@-[j][A]>, where @T<i@K[Neq]j>;
so the size of @T<@F1[I][A]> may be less than the size of @T<@F1[I]>.
@END[Multiple]

Also, certain symbols will have pre-defined referents.
This is necessary -- otherwise we would never be able to pin down anything!
(Proof: First, this is a problem (which we ignored) even in the propositional
case.  
That is, there are non-standard models, in which @T<T> does not denote @u[T]ruth.
@****** Is this true? are there such non-standard models? *****@*
Here, with relations, it is even more troublesome.
Even given a simple binary relation we could never know what it means --
try just reversing the order of arguments. ...)
Realize this is not too unreasonable an assumption:
it is certainly (considered) satisfied as a pre-condition of any communication
that certain things are understood.
(see the Hermeneutic Circle, ...)
There are also certain mechanisms for confirmation,
using "Semantic Attachment" associated with vision, reasoning, ...
For now, we can assume there is some oracle which unambiguously assigns referents
(or referent sets) to symbols of the language.

We will also assume
the language has been fixed beforehand (before the communication act).
That is, we will NOT address the issue of new term introduction (at first).

At all times we will assume the proof system used is complete
-- so @K[RightTurnstile] is equivalent to @K[DoubleTurnstile].
Furthermore, we'll consider every theory deductively closed.
(Yes, this means this description will not accurately
portray how people do (or should) work.
Of course everything here is just a first cut...)

@END[Itemize]

@Section[Defn of (Predicate Calculus) Newness]
First, we need to consider the earlier case -- the tableau method.
Second, we need to consider the situation where some relation is refined.

Approach:
Given any relation symbol, @T<R>, consider @T<R@!@+[A]@/@-[i]>,
which is the relativization of @T<R> to @T<A>, on its @T<i>@+[th] argument.
Hence
@T<HairColor@!@+[MRG]@/@-[1]> is a relation of one argument --
@BEGIN[Equation]
HairColor@!@+[MRG]@/@-[1](x) @K[Iff] HairColor(MRG,x)
@END[Equation]
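As a quick illustration, here is a minimal Python sketch of this relativization operation (the @T<HairColor> facts are invented for the example, and all names are ours): relativizing simply freezes one argument position of the relation.

```python
def relativize(rel, pos, const):
    """R with argument position `pos` (1-based) fixed to `const`,
    i.e. the relativization of `rel` on that argument."""
    def r(*args):
        full = args[:pos - 1] + (const,) + args[pos - 1:]
        return rel(*full)
    return r

# Hypothetical extension of the binary HairColor relation:
hair_color = lambda person, color: (person, color) in {('MRG', 'Black')}

hc_mrg = relativize(hair_color, 1, 'MRG')   # HairColor relativized to MRG
print(hc_mrg('Black'), hc_mrg('Blonde'))    # True False
```

Refining what is known about @T<MRG>'s hair color then shows up as shrinking the set of extensions possible for this derived unary relation.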

Now construct the tableau for this new relation,
where each column is indexed with possible values for this unary relation,
and each row derives from partial instantiations of n-tuples of @F1[L],
excluding this new
@T<R@!@+[A]@/@-[i]>, the original (and here redundant and unneeded)
@T<R>, and the symbol we are really worried about, @T<A>.

Notes:
For unary relations, relativization yields a constant -- but
over what?
Need to augment @F1[U], the universe, with the "truth pair",
@T<@K[LeftBrace]@ux[T,F]@K[RightBrace]>.

----

Lemma: @g[s] is new about @T<A>, wrt @T<Th>, if
@T<New@-[Prop](Th,A,@g[s])> or
@K[Exists] R @K[MemberOf] @F1[L], i.
@T<New@-[Prop](Th,R@!@+[A]@/@-[i],@g[s])>.
@Chapter[Connection to Syntactic Methods]
@Label[SemInSyn]
Connection to proof-theoretic techniques
(which MRG called "syntactic")
@Chapter[Conclusion]
@Label[Conclusion]

@Section[Observations]
@Label[Obs]

Before discussing the uses and limitations (read "pluses and minuses")
of this definition of novelty,
this section will discuss several of its interesting properties.

@BoldWord[Non Universality]
The clause that insists that the altered row in the tableau be non-0
seems depressingly inelegant.
It is, however, essential for newness to remain meaningful:
without the
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]@-{@g[x]}(A)@K[Neq]@~
@K[LeftBrace]@K[RightBrace]> constraint,
every @T<@g[s]> would be @T<New@!'@\@-[Part]>.

It is possible to explain, "semantically", why it is necessary.
As we saw above, the @T<New@-[FI]> relation is almost adequate to convey novelty.
Its only fault was that it could not handle dependency --
we wanted to claim that certain sentences,
those which tied @T<A> down, allowing subsequent assertions to
affect its interpretative range,
were new.
<<here>>

The basic reason we discredited @T<New@-[FI]> is the following.
First, realize that, for any given @T<Th>, any under-determined @T<A>@Foot{
@T<A> is "fully determined" if its interpretative range, @T<@F1[I]@-[Th][A]>,
has exactly one member;
otherwise it is under-determined.},
and sentence @T<@g[s]@-[1]>, one can always
find a sentence @T<@g[s]@-[2]> such that
@T<New@-[FI](Th@K[PlusSign]@g[s]@-[2],A,@g[s]@-[1])>.
E.g. let @T<@g[s]@-[2]> be @T<@g[s]@-[1] @K[Implies] A@K[Eq]@g[t]>,
where @T<@g[t]@K[MemberOf]@F1[I]@-[Th][A]>.

This means that, in the right circumstance, @i{any} sentence could be new!

In fact, it is easy to find @T<@g[s]@-[1]>/@T<@g[s]@-[2]> pairs
where neither
@T<@g[s]@-[1]> nor @T<@g[s]@-[2]> is new with respect to @T<Th>,
but where
@T<@g[s]@-[1]> is new wrt @T<Th@K[PlusSign]@g[s]@-[2]>,
and @T<@g[s]@-[2]> is new wrt @T<Th@K[PlusSign]@g[s]@-[1]>@Foot{
Symbolically,
@T<New(Th@K[PlusSign]@g[s]@-[1],A,@g[s]@-[2])> and
@T<New(Th@K[PlusSign]@g[s]@-[2],A,@g[s]@-[1])>, while
@T<@K[Not]New(Th,A,@g[s]@-[1])> and
@T<@K[Not]New(Th,A,@g[s]@-[2])>.}
That is, the order in which these sentences are asserted makes a difference,
in terms of deciding which should be regarded as new.

To avoid that counter-intuitive possibility, 
we decided to constrain this newness relation, to cover only situations
where the novelty is "tied down" to @T<A>, or to some other symbol which
@T<A> is now known to depend on.
That is the role of the
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]@-{@g[x]}(A)@K[Neq]@~
@K[LeftBrace]@K[RightBrace]> constraint,
without which any @g[s] would be @T<New@!'@\@-[Part]>.

*** from V1 ***

@BoldWord[Non Universality: ]
The clause that insists that the altered row in the tableau be non-0
seems depressingly inelegant.
It is, however, essential for newness to remain meaningful:
without it, every @T<@g[s]> would be @T<New@!'@/@-[Part]>.
@Cite[NewNess] explains why it is necessary,
and it captures the directionality of the "dependency links" 
which originate at the @T<A> term.

<<< this had been Non Commutative >>>
It is possible to explain, "semantically", why it is necessary.
As we saw above, the @T<New@-[FI]> relation is almost adequate to convey novelty.
Its only fault was that it could not handle dependency:
we wanted to claim that certain sentences,
those which tied @T<A> down
by allowing subsequent assertions to affect its interpretative range,
(@T<@F1[I]@-[Th][A]>,)
were new.
That is, we felt that @T<A@K[IFF]B> was new because, given this statement,
@T<@K[Not]B> forced @T<A> to be false, decreasing its
interpretative range.

If we are not careful, that description would be vacuous:
Given any @T<Th>, any unresolved @T<A>@Foot{
We say the symbol @T<A> is @i[unresolved] (wrt @T<Th>)
if there is any sentence @T<@g[t]> such that @T<New@-[FI](Th,A,@g[t])>.},
and sentence @T<@g[s]@-[1]>, one can always
find a sentence @T<@g[s]@-[2]> such that
@T<New@-[FI](Th@K[PlusSign]@g[s]@-[2],A,@g[s]@-[1])>.
E.g. let @T<@g[s]@-[2]> be @T<@g[s]@-[1] @K[Implies] @g[t]>,
for some @T<@g[t]> which satisfies @T<New@-[FI](Th,A,@g[t])>.
If we allowed this, then if any sentence were @T<New@-[FI]>,
@i{every} sentence would be @T<New>!

To avoid that counter-intuitive possibility, 
we decided to constrain this newness relation, to cover only situations
where the novelty is "tied down" to @T<A>, or to some other symbol on which
@T<A> is now known to depend.
That is the role of the
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]@-{@g[x]}(A)@K[Neq]@~
@K[LeftBrace]@K[RightBrace]> constraint.

@BoldWord[Non Commutative]
The clause that insists that the altered row in the tableau be non-0
seems depressingly inelegant.
It is, however, essential,
following from a necessary constraint to keep newness from being universal.
That is, for a given @T<Th> and any under-determined @T<A>@Foot{
@T<A> is "fully determined" if its interpretative range, @T<@F1[I]@-[Th][A]>,
has exactly one member;
otherwise it is under-determined.},
it is easy to find @T<@g[s]@-[1]>/@T<@g[s]@-[2]> pairs
where neither
@T<@g[s]@-[1]> nor @T<@g[s]@-[2]> is new with respect to @T<Th>,
but where
@T<@g[s]@-[1]> is new wrt @T<Th@K[PlusSign]@g[s]@-[2]>,
and @T<@g[s]@-[2]> is new wrt @T<Th@K[PlusSign]@g[s]@-[1]>@Foot{
Symbolically,
@T<New(Th@K[PlusSign]@g[s]@-[1],A,@g[s]@-[2])> and
@T<New(Th@K[PlusSign]@g[s]@-[2],A,@g[s]@-[1])>, while
@T<@K[Not]New(Th,A,@g[s]@-[1])> and
@T<@K[Not]New(Th,A,@g[s]@-[2])>.}
That is, the order in which these sentences are asserted makes a difference,
in terms of deciding which should be regarded as new.

In fact, for any sentence @T<@g[s]@-[1]>, we can cons up an appropriate
@T<@g[s]@-[2]> with that property.
E.g. let @T<@g[s]@-[2]> be @T<@g[s]@-[1] @K[Implies] A@K[Eq]@g[t]>,
where @T<@g[t]@K[MemberOf]@F1[I]@-[Th][A]>.
This means that, in the right circumstance, @i{any} sentence could be new!

To avoid that counter-intuitive possibility, 
we decided to constrain this newness relation, to cover only situations
where the novelty is "tied down" to @T<A>, or to some other symbol which
@T<A> is now known to depend on.
That is the role of the
@T<@K[LeftDoubleBracket]I@!@+[j]@/@-[Th']@K[RightDoubleBracket]@-{@g[x]}(A)@K[Neq]@~
@K[LeftBrace]@K[RightBrace]> constraint,
without which any @g[s] would be @T<New@!'@\@-[Part]>.
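This order dependence can be exhibited by brute force in the propositional
case.  The sketch below (Python, my own encoding) uses the simple
interpretative-range reading of @T<New@-[FI]>, not the full tableau
definition, with @g[s]@-[1] = @T<B> and @g[s]@-[2] = @T<B@K[Implies]A>:

```python
from itertools import product

def models(theory, symbols):
    """All truth assignments over `symbols` satisfying every sentence
    in `theory` (each sentence is a predicate on an assignment)."""
    result = []
    for values in product([True, False], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(sentence(v) for sentence in theory):
            result.append(v)
    return result

def irange(theory, sym, symbols):
    """The interpretative range of `sym`: the truth values it takes
    across the models of `theory`."""
    return {v[sym] for v in models(theory, symbols)}

def new_fi(theory, sym, sentence, symbols):
    """`sentence` is New_FI about `sym` iff asserting it strictly
    shrinks sym's interpretative range."""
    return irange(theory + [sentence], sym, symbols) < irange(theory, sym, symbols)

SYMS = ["A", "B"]
sigma1 = lambda v: v["B"]                    # sigma_1:  B
sigma2 = lambda v: (not v["B"]) or v["A"]    # sigma_2:  B => A

# Neither sentence is new about A wrt the empty theory,
# yet each becomes new about A once the other has been asserted.
```

Running the four checks shows exactly the pattern in the footnote:
neither sentence alone narrows @T<A>, but each does so given the other.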
@BoldWord[Applicability: ]
While it may not be obvious, the "definitional novelty" described above
is applicable to any symbol in predicate calculus as well as propositional logic.
In particular, the same formalism we saw work for constant symbols
will work adequately for relation symbols,
albeit with a far larger tableau:
each column index will be a possible characteristic set of the relation --
a set of n-tuples of elements of the universe,
as opposed to a single element.
Also, nothing forces the universe (which is the set of all possible referents)
to be finite -- that is, everything will follow through with infinite sets,
infinite partitions, etc.
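The growth in column indices is concrete even for tiny universes.
A rough sketch (Python; it only counts the columns, since full enumeration
is exactly what makes the tableau huge):

```python
def num_columns(universe_size, arity=None):
    """Number of column indices in the tableau.
    For a constant symbol (arity None): one column per element.
    For a relation of the given arity: one column per characteristic
    set, i.e. per subset of the set of arity-tuples over the universe."""
    if arity is None:
        return universe_size
    return 2 ** (universe_size ** arity)

# A universe of just 3 elements:
cols_constant = num_columns(3)       # one column per element
cols_unary    = num_columns(3, 1)    # one per subset of the universe
cols_binary   = num_columns(3, 2)    # one per set of pairs
```

Even at three elements, a binary relation symbol already indexes hundreds
of columns where a constant symbol indexed three.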

@Section[Usefulness]
@Label[Uses]

This research was motivated by the authors' attempt to capture
what an "analogy" really is.
That is, as a result of the analogy communication
the hearer now knows 
something about one of the analogues which he did not know before.
So to formally understand what should constitute a legal analogy,
we had to define what it meant for a fact to be new.
This led to ...
It soon became clear that analogy was intimately connected 
with the phenomenon of learning:


While working on learning issues,
it became apparent that this issue was quite relevant to
much of communication.  (See Grice and friends.)
That is, consider how the hearer, H, can begin to comprehend why
the speaker, S, said some sentence @g[s].
Why did S say that?
In general his goal ...
S needs to model H's understanding of the world, and his desire to say something new.

- e.g. to decide what to index a proposition on -- only those things
to which it is new.
I.e. given Th and @g[s], find the x s.t. New(Th,x,@g[s])
(Consider things like:
Why is it that "x+1=0" says something about "x",
but nothing about "+", "0", "1", etc. (or other symbols),
whereas "x+y=y+x" is, indeed, about "+"?)
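That contrast can be checked by brute force over a small structure.
A sketch (Python; the modulus, and the naive interpretative-range test it
uses, are my own choices, not the paper's full tableau definition):

```python
from itertools import product

# Hold the standard structure on Z5 fixed and ask which sentences
# narrow the interpretative range of the free symbol "x".
Z5 = list(range(5))
def plus(a, b): return (a + b) % 5

# "x+1=0" pins x down to a single value:
range_x_given_eq = {x for x in Z5 if plus(x, 1) == 0}

# "x+y=y+x" (y universally quantified) does not constrain x at all:
range_x_given_comm = {x for x in Z5
                      if all(plus(x, y) == plus(y, x) for y in Z5)}

# Conversely, vary "+" over all 16 binary operations on {0,1}:
# commutativity cuts the candidates for "+" in half.
ops = list(product([0, 1], repeat=4))        # op[2*a + b] encodes a "+" b
commutative = [op for op in ops if op[1] == op[2]]
```

So "x+1=0" narrows "x" but commutativity does not, while commutativity
narrows "+" -- matching the intuition above.  (A naive range test can also
be tricked into flagging "+" for "x+1=0", which is the kind of
over-generation the dependency machinery is meant to prevent.)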

Can be used to rank degrees of novelty -- based on the number of
interpretations removed,
weighted by ... (is this related to Shannon?)
And then use this to compare novelties 
(given Th and A, of @g[s]@-[1] and @g[s]@-[2])
>> Find heuristics for deciding where to look, for maximum effect --
(based on model of this tableau)

While intensional definitions are commonplace for many things,
like new terms, ... extensional ones, as here, are of much use.
-see AM, or scientific work (term formation)

We now see this work as providing an important groundwork
with which to describe various senses of newness.
Many of these are in the next section.
@BoldWord[Uses: ]
@Label[Uses]
The primary importance of this work,
as it stands,
is to provide a first stab at describing the different senses of newness.
In addition to its applications as an
analytic tool,
it is applicable to the following areas
(Learning, Knowledge Acquisition, Representation, and Discourse Analysis):

@BEGIN[ITEM1,Spread=0.2]
@i{Analytic Tool:}
An adequate definition of newness
would help us identify the sources (and recipients) of novelty within
learning programs.
For example, the teacher provides the ARCH program 
(@CITE[ARCH]) with the new facts that enable it to learn.
LEX's (@CITE[Lex]) problem solver and critic are the sources of novelty for
the rest of the system.
AM (@CITE[Am]) has no clear source of novelty.
This definition may also help us understand the distinction
between compositional new terms -- such as AM's definition of prime numbers --
and other new terms, such as Bacon's (@CITE[Bacon]) use of intrinsic properties.
Finally, it may lead to a definition of learning not based exclusively on
performance.

@i{Learning and Knowledge Acquisition:}
A computable definition might suggest ways of learning a topic more effectively,
e.g. by focussing the learner's efforts on those aspects
in which he has the most potential of acquiring something new.

@i{Representation:}
E.g. one usually wants to index a proposition @g[s]
by those symbols that @g[s] is about;
and "aboutness" seems intimately tied to (i.e. is about) "newness".
(Ask why you would want to index "x+1=0" by "x" and not by "+",
whereas you associate "x+y=y+x" with "+" and not "x".)

@i{Linguistics:}
The basic purpose of communication is for the speaker, S,
to transmit a set of @i[new] facts, usually about some specific topic.
To understand this process,
we have to know what it means for a fact to be new to H;
and then how S (and H) can use this meta-fact when constructing 
(or understanding)
the message.@Foot{
This research stemmed from the authors' efforts to understand a particular type
of communication, analogy, in terms of this model.}
@END[ITEM1]

@BoldWord[Uses: ]
@Label[Uses]
The primary importance of this work,
as it stands,
is to provide a first stab at describing the different senses of newness.
(Many of these are included in the next section.)
Apart and aside from this intellectual interest,
this research could lead to some fairly tangible applications.

@BEGIN[ITEM1,Spread=0]
@i{Learning and Knowledge Acquisition:}
An adequate (and computable) definition of newness 
might suggest ways of learning a topic more effectively,
e.g. by focussing the learner's efforts on those aspects
in which he has the most potential of acquiring something new.

@i{Representation:}
E.g. one usually wants to index a proposition @g[s]
by those symbols which @g[s] is about;
and "aboutness" seems intimately tied to "newness".
(Ask why you would want to index "x+1=0" by "x" and not by "+",
whereas "x+y=y+x" by "+", not "x".)

@i{Linguistics:}
The basic purpose of communication is for the speaker, S,
to transmit a set of @i[new] facts, usually about some specific topic.
To understand this process,
we have to know what it means for a fact to be new to H;
and then how S (and H) can use this meta-fact when constructing 
(respectively decoding)
the message.@Foot{
This research stemmed from the authors' efforts to understand a particular type
of communication, analogy, in terms of this model.}
@END[ITEM1]

*** from V1 ***
There are many obvious applications
an adequate (and computable) definition of newness would have
in the areas of Learning and Knowledge Acquisition.
It might suggest ways to learn a topic more effectively,
by, for example, focussing your efforts on those aspects
in which you have the most potential of acquiring something new.
This, and several others, are discussed in detail in @Cite[NewNess].

Another set of applications stem from the area of representation.
E.g. one usually wants to index a proposition @g[s]
by those symbols which @g[s] is about;
and "aboutness" seems intimately tied to "newness".
(Consider which symbols you would want to associate "x+1=0" with,
as compared with "x+y=y+x".)

Another benefit is in terms of analyzing discourse.
The basic purpose of communication is for the speaker, S,
to transmit a set of @i[new] facts, usually about some specific topic.
To understand this process,
we have to know what it means for a fact to be new to H;
and then how S (and H) can use this meta-fact when constructing 
(respectively decoding)
the message.
Uses:
Consider the representation question,
how should one index a given proposition?
In general, one would like to be able to go from an object
(actually the symbol which denotes it)
to all the relevant facts which are @i[about] that object.
But now, how does one decide which objects that fact is about?
Realize that lexical tricks may not be sufficient:
While "x+1=0" and "x+y=y+x" are quite similar in appearance,
the first is clearly about "x",
and not about "+", "0", or "1";
while the second is, indeed, a fact about "+".
One could argue that
"x+1=0" was a new fact about "x", 
and "x+y=y+x" was new, wrt "+".
Perhaps "aboutness" is quite similar to "newness".
If so, we are faced with the general task
of finding those symbols @t<x> such that @T<New(Th,x,@g[s])>,
for a given @T<Th> and @T<@g[s]>.
And here an adequate specification of newness is essential.

This research was motivated by the authors' attempt to capture
what an "analogy" really is.
(To claim some communication was analogical,
it was necessary that the hearer learn 
something about one of the analogues,
something he did not know before.)
So to formally understand what should constitute a legal analogy,
we had to define what it meant for a fact to be new.
This led to ...
It soon became clear that analogy was intimately connected 
with the phenomenon of learning:

While working on learning issues,
it became apparent that this issue was quite relevant to
much of communication.  (See Grice and friends.)
That is, consider how the hearer, H, can begin to comprehend why
the speaker, S, said some sentence @g[s].
Why did S say that?
In general his goal ...
S needs to model H's understanding of the world, and his desire to say something new.
Given the ability to define "degrees of novelty"
(perhaps based on which of the interpretations were removed,
weighted by appropriate factors),
we could then decide for which symbols
a Knowledge Base should be expanded,
and possibly in what manner.

While intensional definitions are commonplace for many things,
like new terms, ... extensional ones, as here, are of much use.
-see AM, or scientific work (term formation)

 "fully determined" if its interpretative range has exactly one member;
otherwise it is under-determined.},
@BoldWord[Relation, not Function]
As we saw above, there may be many different @g[s]s which satisfy
@T<New(Th,A,@g[s])>, for any @T<Th> and @T<A>.
We want to point out that a given @g[s] can be new for many different
symbols @T<x>, for a given @T<Th>.
(We will see some applications of this in Section @Ref[Uses].)
Finally, a particular @T<@g[s]> and @T<A> may be new for one theory
@T<Th@-[1]>, but not for another @T<Th@-[2]>.

@BoldWord[Fringe Cases]
While our goal was to present some straightforward instances of novelty,
we were forced to commit ourselves in all situations, including certain
fringe cases --
@T[<@g[s],A,Th>] triples which we currently accept as instances of newness,
but which would not trouble us had they "gone the other way" and been non-instances,
or vice-versa.

Case @Ref[AorB], from above, is an instance of this.
It is, in effect, @i[removing] @T<A>'s dependency on @T<B>.
Is that a new fact about @T<A>?
We declared it was not.
Not only did it allow our formalism to work,
but it "feels right" that any pair of equivalent sentences
(equivalent modulo @T<Th> that is)
should be either both new, or neither.
That is,
@BEGIN[Lemma]
@K[ForAll] Th,@g[s]@-[1],@g[s]@-[2],A. @~
(Th@K[PlusSign]@g[s]@-[1] @K[Eq] Th@K[PlusSign]@g[s]@-[2]) @K[Implies] @~
[New(Th,A,@g[s]@-[1]) @K[Iff] New(Th,A,@g[s]@-[2])].
@END[Lemma]
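In the propositional case this lemma can be spot-checked by enumeration.
A sketch (Python, my own encoding, using the simple interpretative-range
reading of newness rather than the full tableau definition):

```python
from itertools import product

def models(theory, symbols):
    """All truth assignments over `symbols` satisfying every sentence
    in `theory` (each sentence is a predicate on an assignment)."""
    result = []
    for values in product([True, False], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(sentence(v) for sentence in theory):
            result.append(v)
    return result

def irange(theory, sym, symbols):
    """The interpretative range of `sym` across the models of `theory`."""
    return {v[sym] for v in models(theory, symbols)}

def new(theory, sym, sentence, symbols):
    """New iff asserting the sentence strictly shrinks sym's range."""
    return irange(theory + [sentence], sym, symbols) < irange(theory, sym, symbols)

SYMS = ["A", "B"]
sigma1 = lambda v: v["A"] and v["B"]   # sigma_1:  A AND B
sigma2 = lambda v: v["B"] and v["A"]   # sigma_2:  B AND A  (equivalent)

# Th + sigma_1 and Th + sigma_2 have exactly the same models,
# so the two sentences must agree on newness about A.
```

With the empty theory, both equivalent sentences come out new about
@T<A>, as the lemma requires.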

<<here: find other fringe cases>>

@BoldWord["Real World"]
This task would have been much simpler, and less meaningful,
had we not fixed the domain over which terms, ... could vary.
<<here>>
We will assume that some terms are @i{a priori} fixed,
in the sense that
their referents are well defined, and unambiguous.
While essential,
this assumption is neither <major> nor <problematic>.
Oracles -- providing extra-lingual facts, making the needed connection.
Consider standard conversation -- we have to assume some common vocabulary,
and common understanding of terms, to get anywhere.
(Here we just said it formally.)
*** from LONG ***
@Chapter[Notes]@BEGIN[Enumerate]

@BEGIN[Multiple]@Tag[SemClos]
@****** Here: is this true? *****@*
We have to be very careful here.
Realize that the conditions
@BEGIN[Equation]@
@TAG[Independent]
@K[Not](T @K[RightTurnstile] @g[s]) @K[And]
@K[Not](T @K[RightTurnstile] @K[Not]@g[s])
@END[Equation]
do NOT guarantee that
@T<T' @K[Gets] T @K[Union] @K[LeftBrace] @g[s] @K[RightBrace]> will be consistent.
Let @g[s] be Godel's famous "I am unprovable" sentence and
@T<T @K[Eq] "the theory of arithmetic">.
While @Ref[Independent] holds, 
adding @g[s] to @T<T> will mean there is a proof of that @g[s]
-- i.e. @T<T' @K[RightTurnstile] @g[s]> --
a contradiction.
@****** Yoram - what do I mean here?
Should I have been discussing proof procedures? *****@*
@END[Multiple]

@BEGIN[Multiple]@Tag[Univ]
<<<Yoram said I had to insist that the theory
includes some properties of the universe (the range of each
interpretation) -- in particular, its size, or at least, that it is
infinite (larger than the language -- so can add a witness, or 	
whatever...)
I still don't understand why...>>>

Use @T<Th3> as shown above, but consider the new sentence
@T<Exact@-[1]>@Foot{
See Note @Ref[Univ].},
@BEGIN[Equation]
@K[Exists] x. @K[ForAll] y. x @K[Eq] y.
@END[Equation]
rather than "@T<A@K[Eq]C>".
(This @T<Exact@-[1]> sentence produces the same semantic effect
as "@T<A@K[Eq]C>" -- in that
it forces every constant to have the same value.)
@****** Here is that flag!  
Unless I specify the universe, I'll have to have something like@*
@K[Exists] x. @K[ForAll] y. ((x @K[NEq] 0) & (x @K[NEq] 1)) @K[Implies]
(x @K[Eq] y).
*****@*


<<<USE THIS SOMEWHERE>>>
We will, for the time being, assume that the universe is fixed
BEFORE the learning episode begins.
This way we will NOT need to worry about the strange implications of adding
sentences which restrict the size of the universe; these can be complicated.
The worry above may be equivalent to claiming that the universe is of
arbitrarily large size -- i.e. there is no proof of the statement
"there are exactly <n> members", for any <n>.
<<<end>>>

Consider the new sentence @T<@K[ForAll] x. x @K[Eq] 0>,
where @T<Interesting(0)> is known.
This would mean that @T<Interesting(A)> is true as well.
Lest you feel this is merely a quantifier issue,
imagine the theory included
@BEGIN[Equation]
A @K[NEq] 0
Interesting(1),
@END[Equation]
and dealt with a universe consisting only of the elements
@T<@K[LeftBrace]@un[0,1]@K[RightBrace]>.
If we now throw in the sentence, "the universe has precisely two elements",
@T<Exact@-[2]>,
@BEGIN[Equation]
@K[Exists] x,y. (x @K[NEq] y) @K[And] @K[ForAll] z. @~
(z @K[Eq] x) @K[Or] (z @K[Eq] y),
@END[Equation]
then we can deduce that, as @T<A> is not @T<0>,
it must be @T<1>, and hence @T<Interesting>.
@END[Multiple]
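The two-element example above is small enough to check mechanically.
A sketch (Python; the @T<Exact@-[2]> sentence is built in by fixing the
universe at exactly two elements):

```python
from itertools import combinations

UNIVERSE = (0, 1)

def subsets(s):
    """All subsets of s, as sets."""
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# An interpretation picks a referent for A and an extension for
# Interesting.  The theory asserts:  A != 0,  and  Interesting(1).
models = [(a, ext)
          for a in UNIVERSE
          for ext in subsets(UNIVERSE)
          if a != 0 and 1 in ext]

# In every model A must denote 1, hence Interesting(A) is entailed.
```

Enumerating the models confirms the deduction in the text: since @T<A>
cannot be @T<0>, it must be @T<1>, and so @T<Interesting(A)> holds in
every model.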

@END[Enumerate]
@BoldWord[Intensional, not Extensional: ]
This paper only dealt with extensional phenomena,
where novelty was determined with respect to the extensions of the symbols.
Another approach is intensional -- based on descriptions.
A related research task is to find an equivalent, but
syntactical formulation of the semantical @T<New@-[PI]> relation,
in the same manner that @T<N@-[Syn]> matched @T<N@-[Sem]>.

@BoldWord[Deductively Closed: ]
Probably the most serious criticism of this work is its dependency on
a complete deductive system, and the requirement that each theory be
deductively closed.
One purpose of a new-sounding statement might be to direct the hearer
to focus on some collection of facts he already knew,
rather than to expose him to new facts.
It might be possible to enhance this formalism to handle such
resource-limited deducibility.@Foot{
Defining "true interpretations", within this framework, is an unsolved
(and apparently unaddressed) problem on the critical path.}
Then we could define, and address, issues like monotonic novelty and
information obsolescence.

@BoldWord[Intractability: ]
A second major criticism is that we have not shown a computable version
of this solution.
@Comment{ Is this true?
It is, of course undecidable in its full generality,
as it requires the full set of models for a theory to be enumerated.}
Even assuming access to the possible interpretations,
the tableau we have discussed is exponential in the size of the language,
with huge "constants" when the universe is large
(and gargantuan if we allow even unary predicates).
@Comment{Had been FOOT
With just @i[M] symbols, where the universe has @i[N] members,
there can be as many as @T<(N+1)@+[M-1]> rows when all are constant symbols.
If any of the symbols is for a unary predicate, rather than a constant,
throw in a multiplicative factor of @T<2@+[N]>.}
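Taking the counts sketched in the commented-out footnote at face value,
the blow-up is easy to tabulate (Python; the formulas are simply
transcribed from that note, not re-derived):

```python
def tableau_rows(m_symbols, n_universe, n_unary=0):
    """Rows in the tableau, per the note above: (N+1)^(M-1) for M
    constant symbols over an N-element universe, with an extra
    multiplicative factor of 2^N for each symbol that is instead a
    unary predicate."""
    return (n_universe + 1) ** (m_symbols - 1) * (2 ** n_universe) ** n_unary

constants_only = tableau_rows(5, 10)       # 11**4 rows
with_one_pred  = tableau_rows(5, 10, 1)    # times 2**10
```

Even five symbols over a ten-element universe give tens of thousands of
rows, and a single unary predicate multiplies that by a thousand.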

However there may very well be simple subcases of this problem,
based on some simple assumptions.
Or there may be a body of heuristics which, while not guaranteed,
would curtail part of the explosion, at the risk of incorrectly declaring
some sentences new (or vice versa).

These issues, and many others, all seem fertile grounds for investigation.
Such explorations may very well lead to interesting, and usable,
new results.
@Section[Future Work]
@Label[FutWork]

First, as we saw in Section @Ref[Analysis4] above,
this description does indeed adequately cover the propositional case.
In fact, it is easy to show it will also cover any
predicate calculus case where the universe is fixed and finite,
and where the theory deals only with constants, @i<sans> relations.

It did depend on our notion of
"functional dependency",
which corresponded to certain changes to some row of the tableau.
We will see that this is inadequate in the general predicate calculus case.

Find an equivalent syntactic form to this one...

Range of Applicability
[Assertion vs Definitional -- we've discussed only definitional
	or Extension vs Intension
	strong Essentialist position: don't know MRG until know his hair color
  who would use this? Experimenters, going from empirical data, ...]

Types of newness:
	Intensional vs Extensional
	Definitional vs Assertional
	wrt Symbol, vs Object
We'll only deal with Extension, Definition case
?? where is new terms? ??

If "fully-determined"
(if @T<@F1[I]@-[Th][A]> is a single member)
Unique interp - if only 1 column has any 1's in it
(then nothing "new" can be said -- as could only set full row to all 0)
Ahh, but assertional!

Monotonic Novelty --

There are many other things to say about @i[new]ness,
in particular, there are many obvious criteria associated with
newness which we did not use (consider?) above.
Several of these rival theories
(for what could or should constitute a new fact)
are presented below.

Everything above ignored three important issues.
@BEGIN[Itemize]
Nowhere did we consider @T<Th@-[S]>, the facts known by the speaker.
Realize that one (rival) alternative theory for newness,
in this communicative sense,
is that the hearer truly understood a statement given by that speaker.
[complications: really based on speaker's understanding of hearer, and vice versa]
Also, what of other understandings...

"Truth" -- one would like to conclude only things which really are true
in the real world, whereas this version says nothing about this.

@BEGIN[Multiple]
Resource limited deducability --
perhaps we should be using "|~" rather than
@K[RightTurnstile], to indicate deducibility, within resource limitations, ...

We have intentionally skirted the issue of efficiency,
and assumed that arbitrarily competent theorem provers are at work.
In any real application we would be dealing with resource limited processors.
Here one purpose of an analogy statement might be to direct the hearer
to focus on some collection of facts he already knew,
rather than to expose him to new facts.

Again, that explanation will be the fodder for a different paper.
@END[Multiple]
@END[Itemize]
@TEXT1{@SubHeading[Acknowledgments]
We would like to especially thank
Tom Dietterich for his substantial contributions to this work.
The following people also contributed:
@Comment{Professor Douglas Lenat,}
Professor Bruce Buchanan,
@Comment{(the following logicians who helps us sort out what we should have meant:)}
Yoram Moses, Dr. Jussi Ketonen, Dr. Lew Creary,
@Comment{ and, for listening and suggesting good comments, }
Jim Davidson, Jock Mackinlay and Ben Moszkowski.
This basic research was funded by ARPA Contract #MDA903-80-C-0107.}
@Comment{ Put this somewhere!!!

Resembles information theory -- first is in terms of eliminating number
of possibilities; the second is like conditional probability (sorta)
[i.e. holding one constant, and varying everything else]

---@*
Fix up -
(1) questions to brian
(2) Tableau for Table - no "list of tables"  (leave first as table)
	also, no chapter number prefix (i.e. 3-6)

Things to read:
>>Chang/Keisler discussion of Craig Interpolation Theory (defining new terms, ...)
and Robinson consistency
>>Montague/? - on extension of theories.

Another way to show insufficiency of @T<LexInclude> as a criterion for
newness involves using a relation symbol which was not in
@F1[L], the language of the original theory, @T<T>.
@SubHeading[Case ?: Floogle(A)]
@Tag[Floogle]
Imagine hearing that @T<Floogle(A)> holds,
where you know absolutely nothing about this @T<Floogle> relation.
There are no new deductions you could make at this point,
nor could you further specify requirements for @T<A>-ness.
As this statement conveys no usable information, we want
@T<@K[Not]New(T,A,"Floogle(A)")>.
(We discuss this more formally <<somewhere>>.)
}
@BEGIN[Comment]
∂20-Jan-83  1448	DIETTERICH@SUMEX-KI10 (SuNet)  	Novelty paper
To: rdg at SU-AI
cc: csd.genesereth at SU-SCORE, DIETTERICH at SUMEX-KI10

Russ,

I've decided I really don't have time to do a "blitz" on the novelty
paper.  I would be willing to edit the paper for technical points before
you submit it.  And of course we will meet on Monday again and we can
discuss what you've come up with.  However, I really don't think there
is time to do a quality job before the deadline.  The topic has a lot
of potential, and I'd certainly be interested in collaborating on a Journal-
style paper.  Topics I think need a lot more thought include: novelty
in the face of various kinds of incomplete inference engines, novelty
in non-monotonic systems, new terms.  This is starting to sound like
a major chunk of somebody's thesis: probably yours (except perhaps for the
new term stuff, which might fall into my thesis).

Thanks for asking me to participate, and good luck if you decide
to go ahead with it.

--Tom
-------

∂31-Jan-83  1629	DKANERVA@SUMEX-AIM (SuNet)  	IJCAI paper

Russ,
     I was gone most of Friday afternoon, so I couldn't do anything
on your paper.  If it's useful to you, I'll be glad to read it through
this week and give you comments.  Or you might want to wait until you
hear whether it has been accepted for publication.
     Let me know what would be helpful for you.	    --  Dianne
-------

∂TO dkanerva@sumex-aim 17:19 31-Jan-83
IJCAI Submission
Thanks anyway, Dianne.
No need to read it, now that it's already submitted.
But I'd greatly value your suggestions, in case it is accepted.

(Eventually MRG and I intend to make a journal publication of it --
and hope to get your comments on that as well.)

Thanks,
	Russ
@END[Comment]
Still to read:

Reference Logics, by Belnap, (Logician/Philosopher of UofPitt, now Institute)
	[from Bob Moore]

Extendable ...
by ? and Montague
	[from Lew Creary]

Interpretation (in Chang/Keisler) -- wrt
	Robinson Consistency & Craig Interpolation Theory
	[from Jussi Ketonen]

Schank, in Cog Sci
	[from MRG]